Reupload exported editable mapping annotation zip #8969
Conversation
📝 Walkthrough

Adds server-side import for editable-mapping (proofreading) annotations from uploaded ZIPs, propagates editable-mapping metadata through upload/NML parsing, introduces tracing-store endpoints and IO to initialize mappings from ZIPs, refactors the chunk cache into a trait with DS/TS implementations, adds boolean datastore/Zarr support, config/i18n entries, and small UI/log fixes.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks: ✅ 5 checks passed.
…for multi-layer proofreading
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)
31-31: Clarify comment about boolean fill_value support.

The comment states "Reading boolean datasets is not supported" but the parsing logic (lines 178-189) now reads boolean fill_value from JSON and stores it as a string. Consider updating the comment to reflect this capability more clearly.
Apply this diff to clarify the comment:
```diff
- fill_value: Either[String, Number], // Reading boolean datasets is not supported. When writing boolean, true and false literals will be stored in the string.
+ fill_value: Either[String, Number], // Boolean fill_value is read from JSON and stored as string ("true" or "false"). When writing boolean data type, fill_value is serialized as JSON boolean.
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (1)
53-61: Handle boolean datasets in `createEmpty`

`ArrayDataType.bool` now exists and the rest of the stack can construct boolean chunks, but `createEmpty` still has no branch for it. Calling `createEmpty(ArrayDataType.bool, …)` will hit the default path and throw a `MatchError`, breaking workflows that expect an empty boolean MultiArray.

```scala
def createEmpty(dataType: ArrayDataType, rank: Int): MultiArray = {
  val datyTypeMA = dataType match {
    case ArrayDataType.i1 | ArrayDataType.u1 => MADataType.BYTE
    case ArrayDataType.i2 | ArrayDataType.u2 => MADataType.SHORT
    case ArrayDataType.i4 | ArrayDataType.u4 => MADataType.INT
    case ArrayDataType.i8 | ArrayDataType.u8 => MADataType.LONG
    case ArrayDataType.f4 => MADataType.FLOAT
    case ArrayDataType.f8 => MADataType.DOUBLE
    case ArrayDataType.bool => MADataType.BOOLEAN // new branch for boolean
  }
  MultiArray.factory(datyTypeMA, Array.fill(rank)(0))
}
```
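To make the failure mode concrete, here is a minimal, self-contained sketch. The enum stand-ins below are hypothetical (not the real `ArrayDataType`/`MADataType`); the point is only how a match with a missing branch surfaces as a runtime `MatchError`:

```scala
object MatchErrorSketch {
  // Hypothetical stand-ins for the real data type enums; illustrative only.
  trait DataType
  case object U1 extends DataType
  case object F4 extends DataType
  case object Bool extends DataType

  // Like createEmpty before the fix, this match omits the Bool case,
  // so calling it with Bool throws scala.MatchError at runtime.
  def toMAType(dt: DataType): String = dt match {
    case U1 => "BYTE"
    case F4 => "FLOAT"
  }
}
```

Adding the missing branch, as the suggested `createEmpty` fix does, turns this partial function into a total one.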
🧹 Nitpick comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)
178-189: Simplify flatMap to map for cleaner code.

The `flatMap(value => JsSuccess(...))` pattern is unnecessarily verbose. Since you're already in a for-comprehension that expects `JsResult`, you can use `map` to transform the value directly. Apply this diff to simplify the fill_value parsing:

```diff
  fill_value <- (fill_value_raw.validate[String],
                 fill_value_raw.validate[Number],
                 fill_value_raw.validate[Boolean]) match {
-   case (asStr: JsSuccess[String], _, _) =>
-     asStr.flatMap(value => JsSuccess[Either[String, Number]](Left(value)))
-   case (_, asNum: JsSuccess[Number], _) =>
-     asNum.flatMap(value => JsSuccess[Either[String, Number]](Right(value)))
-   case (_, _, asBool: JsSuccess[Boolean]) =>
-     asBool.flatMap(value => JsSuccess[Either[String, Number]](Left(value.toString)))
+   case (asStr: JsSuccess[String], _, _) => asStr.map(Left(_))
+   case (_, asNum: JsSuccess[Number], _) => asNum.map(Right(_))
+   case (_, _, asBool: JsSuccess[Boolean]) => asBool.map(v => Left(v.toString))
    case _ => JsError("Could not parse fill_value as string, number or boolean value.")
  }
```
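The equivalence behind this suggestion can be checked in isolation. The sketch below uses Scala's right-biased `Either` as a stand-in for play-json's `JsResult` (an assumption, chosen to keep the example dependency-free): a `flatMap` that immediately re-wraps its value in a success is just `map`:

```scala
object MapVsFlatMap {
  // Either as a stand-in for JsResult: Right ~ JsSuccess, Left ~ JsError.
  type Result[A] = Either[String, A]

  // The verbose pattern flagged above: flatMap that re-wraps in a success.
  def verbose(r: Result[Boolean]): Result[Either[String, Number]] =
    r.flatMap(value => Right(Left(value.toString)))

  // The suggested simplification: map transforms the success value directly.
  def simplified(r: Result[Boolean]): Result[Either[String, Number]] =
    r.map(value => Left(value.toString))
}
```

Both functions produce identical results for success and error inputs, which is why the rewrite is behavior-preserving.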
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- app/Startup.scala (2 hunks)
- app/controllers/AnnotationIOController.scala (3 hunks)
- app/models/annotation/AnnotationUploadService.scala (1 hunks)
- app/models/annotation/WKRemoteTracingStoreClient.scala (1 hunks)
- app/models/annotation/nml/NmlParser.scala (2 hunks)
- app/models/annotation/nml/NmlVolumeTag.scala (1 hunks)
- conf/application.conf (1 hunks)
- conf/messages (1 hunks)
- frontend/javascripts/viewer/view/version_entry.tsx (2 hunks)
- unreleased_changes/8969.md (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetHeader.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3DataType.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (2 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
- webknossos-tracingstore/conf/standalone-tracingstore.conf (1 hunks)
- webknossos-tracingstore/conf/tracingstore.latest.routes (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (17)

- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3DataType.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1): ArrayDataType (5-82)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetHeader.scala (2)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/wkw/WKWHeader.scala (1): fill_value (95-95)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala (1): fill_value (54-54)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (2)
  - app/utils/WkConf.scala (1): Cache (74-80)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreConfig.scala (2): Cache (32-46), Redis (51-54)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1): DSChunkCacheService (28-30)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1): DSChunkCacheService (28-30)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1): DSChunkCacheService (28-30)
- app/models/annotation/WKRemoteTracingStoreClient.scala (3)
  - util/src/main/scala/com/scalableminds/util/tools/Fox.scala (7): s (236-240), s (240-250), s (250-259), Fox (30-230), Fox (232-305), successful (53-56), failure (58-62)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (2): addQueryString (28-31), postFileWithJsonResponse (135-138)
  - app/controllers/UserTokenController.scala (1): RpcTokenHolder (31-39)
- app/models/annotation/nml/NmlParser.scala (1)
  - frontend/javascripts/viewer/model/accessors/volumetracing_accessor.ts (1): hasEditableMapping (625-634)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (2)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1): ArrayDataType (5-82)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (2): MultiArrayUtils (11-161), createFilledArray (32-51)
- app/controllers/AnnotationIOController.scala (3)
  - util/src/main/scala/com/scalableminds/util/tools/Fox.scala (8): map (281-284), map (377-377), Fox (30-230), Fox (232-305), successful (53-56), failure (58-62), serialCombined (95-99), serialCombined (99-111)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/annotation/AnnotationLayer.scala (5): toProto (19-21), AnnotationLayer (13-21), AnnotationLayer (23-51), AnnotationLayerStatistics (53-70), unknown (69-69)
  - app/models/annotation/WKRemoteTracingStoreClient.scala (2): saveEditableMappingIfPresent (252-269), saveVolumeTracing (227-250)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreConfig.scala (1): ImageArrayChunks (36-38)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1)
  - util/src/main/scala/com/scalableminds/util/time/Instant.scala (1): now (48-48)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1): ArrayDataType (5-82)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (3)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (4): webknossos (73-75), validateAccessFromTokenContext (90-97), UserAccessRequest (30-30), UserAccessRequest (31-75)
  - util/src/main/scala/com/scalableminds/util/time/Instant.scala (2): now (48-48), logSince (72-75)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1): initializeFromUploadedZip (131-159)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1): DSChunkCacheService (28-30)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (2): Tracingstore (14-32), Cache (28-30)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (9)
  - util/src/main/scala/com/scalableminds/util/io/ZipIO.scala (2): ZipIO (15-309), zip (70-87)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala (1): VaultPath (19-100)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/UPath.scala (2): UPath (54-96), fromLocalPath (80-80)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TsTempFileService.scala (1): TsTempFileService (8-11)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/TracingDataStore.scala (1): TracingDataStore (13-66)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1): create (134-141)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1): createDirectory (42-49)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3Array.scala (3): Zarr3Array (16-40), Zarr3Array (42-184), open (18-40)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetArray.scala (1): readAsMultiArray (189-192)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-smoketest-push
- GitHub Check: backend-tests
🔇 Additional comments (21)
app/Startup.scala (1)
101-101: LGTM! Log message standardization.

The Unicode ellipsis (…) has been replaced with the ASCII three-dot ellipsis (...) in both log messages. This standardization improves consistency and avoids potential encoding issues.
Also applies to: 118-118
frontend/javascripts/viewer/view/version_entry.tsx (1)
170-170: LGTM! Correctness fix for segment identifier display.

The change correctly aligns `segment2Description` with `segmentId2` instead of incorrectly referencing `segmentId1`. This ensures that version history entries accurately reflect which segments were involved in split and merge agglomerate operations.

Also applies to: 183-183
webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1)
135-138: LGTM! Method follows existing patterns.

The implementation correctly mirrors the existing `postFile` method (lines 130-133) while adding JSON response parsing, consistent with other `*WithJsonResponse` methods in the class. The type parameter and return type are appropriate.

Note: Like the existing `postFile` method, this doesn't explicitly set a Content-Type header. Play's WSClient may set this automatically based on the file, but if the server expects a specific content type (e.g., `application/zip` for the editable mapping uploads mentioned in the PR), you may want to verify the header is correctly propagated.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1)
21-21: LGTM: DI type update aligns with trait refactoring.

The parameter type update from `ChunkCacheService` to `DSChunkCacheService` is consistent with the trait-based cache service refactoring across the datastore module.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (1)
16-16: LGTM: Consistent DI migration to DSChunkCacheService.

The import and constructor parameter updates align with the trait-based refactoring of the chunk cache service.
Also applies to: 28-28
webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1)
64-64: LGTM: DI binding correctly updated.

The binding change from `ChunkCacheService` to `DSChunkCacheService` ensures the concrete implementation is wired as an eager singleton, consistent with the trait-based refactoring.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)
13-13: LGTM: DI type update consistent with refactoring.

The import and constructor changes align with the DSChunkCacheService migration across the datastore module.
Also applies to: 62-62
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (1)
11-11: LGTM: Consistent constructor dependency update.

The migration to `DSChunkCacheService` follows the established pattern across the datastore services.

Also applies to: 45-45
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (1)
13-13: LGTM: DI update aligns with trait-based refactoring.

The constructor dependency change to `DSChunkCacheService` is consistent with the module-wide cache service migration.

Also applies to: 53-53
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2)
9-26: LGTM: Clean trait extraction improves flexibility.

The refactoring from a concrete class to a trait with `protected val maxSizeBytes` enables different implementations while preserving the cache initialization logic. This design improves testability and follows the dependency inversion principle.
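The shape of this pattern can be reduced to a few lines. The following is a hedged sketch of the trait-plus-implementation split described above, not the actual webKnossos classes: the real trait builds an AlfuCache from the size, while here a derived field stands in for that shared initialization logic, and the 1 MiB-per-chunk assumption is invented for illustration.

```scala
object ChunkCacheSketch {
  trait ChunkCache {
    // Implementations supply the size; shared initialization logic lives in the trait.
    protected def maxSizeBytes: Long
    // Stand-in for the real cache construction: assume ~1 MiB per cached chunk.
    lazy val maxEntries: Long = maxSizeBytes / (1024L * 1024L)
  }

  // Datastore-flavoured implementation sources the limit from its own config value.
  final class DSChunkCache(configuredBytes: Long) extends ChunkCache {
    protected val maxSizeBytes: Long = configuredBytes
  }
}
```

A tracingstore-flavoured implementation would differ only in where `maxSizeBytes` comes from, which is exactly what makes the trait extraction useful.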
28-30: LGTM: DSChunkCacheService provides datastore-specific sizing.

The concrete implementation correctly sources `maxSizeBytes` from the datastore configuration, maintaining the original behavior while enabling the trait-based design.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)
46-50: No changes required for byteOrder detection

`Option.contains("big")` correctly matches the only big-endian value ("big"), and no other variants exist in the codebase.

conf/application.conf (1)
203-203: LGTM!

The new chunk cache configuration is properly documented and the 2 GB limit aligns with the datastore's image array cache configuration. This will be consumed by the new TSChunkCacheService.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1)
28-28: LGTM!

The DI binding for TSChunkCacheService follows the established pattern for service registration and is correctly configured as an eager singleton.
conf/messages (1)
270-270: LGTM!

The error message is clear, grammatically correct, and provides actionable information about what's missing (file or baseMappingName).
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (1)
28-31: LGTM!

The new Cache configuration object follows the same pattern as existing configuration objects (Redis, Fossildb, WebKnossos) and correctly defines the chunk cache size parameter. The children list is properly updated to include the new Cache object.
webknossos-tracingstore/conf/standalone-tracingstore.conf (1)
64-64: LGTM!

The standalone tracingstore configuration correctly mirrors the chunk cache setting from application.conf, ensuring consistent behavior across deployment modes.
webknossos-tracingstore/conf/tracingstore.latest.routes (1)
43-43: LGTM!

The new route for saving editable mappings from ZIP follows RESTful conventions and is properly positioned within the Editable Mappings section. The parameter types are appropriate for the operation.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (2)
42-49: LGTM!

The new `createDirectory` method follows the same pattern as the existing `create` method, with appropriate use of `Files.createDirectory` and proper tracking in `activeTempFiles`.
56-59: LGTM!

The updated cleanup logic correctly handles both files and directories. Using `FileUtils.deleteDirectory` for directories ensures recursive deletion of directory contents, while regular files continue using `Files.delete`. This prevents exceptions that would occur if `Files.delete` were called on a non-empty directory.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
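The files-vs-directories rule can be sketched with only `java.nio`. This is a stand-in for the `FileUtils.deleteDirectory` branch, not the PR's code: `Files.delete` alone would throw `DirectoryNotEmptyException` on a non-empty directory, so directories get a deepest-first recursive walk.

```scala
import java.nio.file.{Files, Path}
import java.util.Comparator

object CleanupSketch {
  // Delete a path: plain Files.delete for regular files, a deepest-first
  // recursive walk for directories (children are deleted before their parents).
  def deleteRecursively(p: Path): Unit =
    if (Files.isDirectory(p))
      Files.walk(p).sorted(Comparator.reverseOrder[Path]()).forEach(f => Files.delete(f))
    else
      Files.delete(p)
}
```

The reverse-ordered sort is what guarantees children come before parents in the deletion stream.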
6-7: LGTM!

The TSChunkCacheService correctly extends ChunkCacheService and derives `maxSizeBytes` from the configuration. The DI wiring with `@Inject` is proper, and the protected visibility of `maxSizeBytes` matches the contract of the parent class.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala
...calableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala
Actionable comments posted: 1
♻️ Duplicate comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
131-157: Return final persisted version, not just the count (and keep grouping materialized)

Iterator exhaustion is fixed via toSeq, but returning the count alone can mislead callers. Prefer returning the actual last version written so follow-up uploads can chain correctly.

```diff
- updatesGrouped = updateActions.grouped(100).toSeq
+ updatesGrouped = updateActions.grouped(100).toSeq
  _ <- Fox.serialCombined(updatesGrouped.zipWithIndex) {
    case (updateGroup: Seq[UpdateAction], updateGroupIndex) =>
      tracingDataStore.annotationUpdates.put(annotationId.toString,
-                                            startVersion + updateGroupIndex,
+                                            startVersion + updateGroupIndex,
                                             Json.toJson(updateGroup))
  }
- numberOfSavedVersions = updatesGrouped.length
- } yield numberOfSavedVersions
+ finalVersion =
+   if (updatesGrouped.isEmpty) startVersion - 1 else startVersion + updatesGrouped.size - 1
+ } yield finalVersion
```

If the API intentionally needs "number of saved versions", please align the controller/clients accordingly and rename variables to avoid confusion.
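Both the iterator-exhaustion hazard and the version arithmetic are easy to demonstrate standalone. The values below are illustrative (plain `Int`s rather than the real `UpdateAction` stream):

```scala
object GroupedVersionsSketch {
  // grouped on an Iterator yields a one-shot GroupedIterator; materializing with
  // .toSeq lets the groups be both written out and counted afterwards.
  val updateActions: Iterator[Int] = (1 to 250).iterator
  val updatesGrouped: Seq[Seq[Int]] = updateActions.grouped(100).toSeq

  // With startVersion v and n groups, group i is written at version v + i,
  // so the final persisted version is v + n - 1 (or v - 1 if nothing was written).
  def finalVersion(startVersion: Long, numGroups: Int): Long =
    if (numGroups == 0) startVersion - 1 else startVersion + numGroups - 1
}
```

With 250 actions and groups of 100, three versions are written; starting at version 1 the last persisted version is 3, which is the value the comment above suggests returning.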
🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
186-189: Validate shape consistency between arrays

Ensure the edgeIsAddition length matches the edges array's first dimension to avoid silent truncation/mismatch.

```diff
- numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+ numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+ numAdds <- edgeIsAdditionZarrArray.datasetShape.flatMap(_.headOption).toFox
+ _ <- Fox.fromBool(numEdges == numAdds) ?~> "editableMappingFromZip.edgeIsAddition.length.mismatch"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- app/controllers/AnnotationIOController.scala (3 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
🔇 Additional comments (8)
app/controllers/AnnotationIOController.scala (5)
135-153: LGTM! Return value properly destructured and propagated.

The changes correctly handle the new tuple return type from `mergeAndSaveVolumeLayers`, destructuring it into `mergedVolumeLayers` and `earliestAccessibleVersion`. The `earliestAccessibleVersion` is appropriately passed to the `AnnotationProto` constructor, replacing the previous constant `0L`.
171-173: LGTM! Function signature and empty case correctly updated.

The return type change to `Fox[(List[AnnotationLayer], Long)]` is consistent with the need to propagate the earliest accessible version. Returning `0L` for the empty case is appropriate, likely indicating no version restrictions when no volume layers are present.
174-181: LGTM! Comprehensive edge-case validations.

The three validation checks effectively prevent problematic scenarios:

- Lines 174-176: Prevents duplicate fallback layers within a single annotation using `distinctBy`, ensuring each volume layer references a unique segmentation layer.
- Lines 177-178: Blocks merging when multiple annotations each contain multiple volume layers, avoiding complex multi-dimensional merge logic.
- Lines 179-181: Prevents merging annotations with editable mapping edges, as merging proofreading history is complex and unsupported.

These validations are well-reasoned and protect against data corruption.
183-218: Past version counter issue successfully resolved!

The previous review flagged a critical bug where the version counter would reset when `saveEditableMappingIfPresent` returned `0L`, corrupting FossilDB history by overwriting earlier updates.

The current implementation at line 199 correctly uses addition rather than replacement:

```scala
_ = layerUpdatesStartVersionMutable = layerUpdatesStartVersionMutable + numberOfSavedVersions
```

This ensures:

- When `numberOfSavedVersions = 0` (no editable mapping): counter remains unchanged ✓
- When `numberOfSavedVersions > 0`: counter advances by the exact number of saved versions ✓

The monotonic version ordering is now preserved across all layers, preventing history corruption.

Example trace:

- Layer 1: saves updates 1-5 → counter becomes 6
- Layer 2: no editable mapping → counter stays 6
- Layer 3: saves updates 6-8 → counter becomes 9

Each layer receives a non-overlapping version range, as intended.
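That bookkeeping is just a prefix sum over the per-layer counts. A small sketch (a hypothetical helper, not code from the PR) reproduces the trace above:

```scala
object VersionCounterSketch {
  // Given the number of versions each layer saved, return each layer's start
  // version: the counter advances by numberOfSavedVersions per layer and stays
  // put for layers that saved nothing (no editable mapping).
  def assignStartVersions(savedPerLayer: Seq[Long], initial: Long = 1L): Seq[Long] =
    savedPerLayer.scanLeft(initial)(_ + _).init
}
```

For the trace above (5, 0, 3 saved versions) this yields start versions 1, 6, 6, and the running counter ends at 9, matching the non-overlapping ranges described.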
218-218: Verify return value consistency across scenarios.

There's a potential inconsistency in the `earliestAccessibleVersion` returned:

- Line 218 (single annotation): Returns `layerUpdatesStartVersionMutable`, which is `1L` if no editable mappings were saved
- Line 239 (multiple annotations): Returns `0L` even though no editable mappings can exist (blocked by the line 179-181 validation)

Question: When no editable mappings are saved, should both scenarios return `0L` to consistently indicate "no version restrictions"?

The difference might be:

- `0L` = special sentinel for "all versions accessible, no imported history"
- `1L` = "versions start at 1, all accessible"

If both values have the same meaning, this inconsistency is harmless. However, if `0L` vs `1L` have different semantics when used to restrict version access elsewhere in the codebase, returning `1L` for a single annotation without editable mappings could inadvertently restrict access.

Run the following script to understand how `earliestAccessibleVersion` is used:

Also applies to: 232-239
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (2)
34-36: Good extension with KeyValueStoreImplicits

Brings in the needed toFox/shiftBox utils; consistent with usage below.
193-211: Harden upload endpoint: parser, logging, cleanup, and return semantics

- Use a raw body parser with explicit maxLength and wrap in log() for consistency and observability.
- Ensure the uploaded temp file is deleted after processing to avoid disk leaks.
- Verify whether the API should return the final persisted version (startVersion + groups - 1) vs just the count; caller expectations may break otherwise.
- Optionally assert that annotationId actually owns tracingId and that the tracing has editableMapping before writing.

Example adjustments:

```diff
- def saveFromZip(...): Action[AnyContent] =
-   Action.async { implicit request =>
+ def saveFromZip(...): Action[AnyContent] =
+   Action.async(bodyParsers.raw /* consider maxLength */) { implicit request =>
-     accessTokenService.validateAccessFromTokenContext(UserAccessRequest.webknossos) {
-       for {
-         editedEdgesZip <- request.body.asRaw.map(_.asFile).toFox ?~> "zipFile.notFound"
+     log() {
+       accessTokenService.validateAccessFromTokenContext(UserAccessRequest.webknossos) {
+         for {
+           editedEdgesZip <- request.body.asRaw.map(_.asFile).toFox ?~> "zipFile.notFound"
            before = Instant.now
-           numberOfSavedVersions <- editableMappingIOService.initializeFromUploadedZip(tracingId,
+           saved <- editableMappingIOService.initializeFromUploadedZip(tracingId,
                                                                                       annotationId,
                                                                                       startVersion,
                                                                                       baseMappingName,
                                                                                       editedEdgesZip)
            _ = Instant.logSince(before, s"Initializing editable mapping $tracingId from zip")
-         } yield Ok(Json.toJson(numberOfSavedVersions))
-       }
+         } yield {
+           // best-effort cleanup of uploaded temp file
+           try editedEdgesZip.delete() catch { case _: Throwable => () }
+           Ok(Json.toJson(saved))
+         }
+       }
+     }
    }
```

To confirm what callers expect (count vs final version), run:
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
192-225: UpdateAction construction looks correct

Correctly maps additions to Merge and removals to Split with segment IDs, unit mag, and timestamp. Good defaults for optional metadata.
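The mapping this comment describes can be stated as a two-case sketch. The case classes below are hypothetical stand-ins; the real update actions carry more fields (mag, timestamp, optional metadata):

```scala
object UpdateActionSketch {
  sealed trait UpdateAction
  final case class MergeAgglomerate(segmentId1: Long, segmentId2: Long) extends UpdateAction
  final case class SplitAgglomerate(segmentId1: Long, segmentId2: Long) extends UpdateAction

  // As reviewed: an edge flagged as an addition becomes a merge, otherwise a split.
  def fromEdge(src: Long, dst: Long, isAddition: Boolean): UpdateAction =
    if (isAddition) MergeAgglomerate(src, dst) else SplitAgglomerate(src, dst)
}
```

This is also the spot where the isAddition/isMergingEdge naming discussion below applies: the boolean's meaning is exactly "this edge is a merge".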
...calableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala
Codewise I didn't find much. A few optional improvements and mainly questions :)
Did not test yet though
...calableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala
```scala
None,
None,
None,
chunkCacheService.sharedChunkContentsCache)
```
🤔 Here the chunk cache is passed so that the zarr3 array has one, as it needs one. But since the whole array is read, this means (without me checking the code) that all of the array is loaded into the cache. Imo, after the upload this zarr array is never read from again and is thus obsolete, so loading all its content through a cache is as well. So I'd say this is unnecessary and could maybe be replaced with a None or so, which could turn off the caching in a zarr array, or we could pass a cache which never stores anything 🤔?
Hmm, you’re not wrong. I think it is likely we will use the same cache in more spots in the future. Maybe in a future iteration we can also get rid of reading the full array here (though I think it’s usually small enough that this isn’t a problem). I’d like to avoid changing the interface of AlfuCache or DatasetArray for this optimization. If it’s ok with you, we could just reduce the datastore default cache size in the config?
ok 👍, maybe add a comment to TSChunkCacheService.scala that this cache is pretty much unused in the current state but might be useful in the future? Just to persist that we had this discussion and decided to keep the cache, although it doesn't really have a benefit except helping to make the upload of an editable mapping interface-compliant when reading a DatasetArray?
```scala
private def buildUpdateActionFromEdge(edgeSrc: Long,
                                      edgeDst: Long,
                                      isAddition: Boolean,
```
🤔 if "isAddition" means a merging edge, I think it would be more intuitive to call it isMergingEdge or isMerging or so. Because while reading the previous code up until now, I was unsure what "isAddition" and its zarr array meant.
Good point. I’d like to follow the naming from the file format in this context. However, I added a code comment. I hope it is clearer now?
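For readers following this thread, the flag semantics can be sketched as a tiny self-contained example. The names (MergeAction, SplitAction, EdgeActions) are illustrative only, not the actual webKnossos types; the point is just that true in the edgeIsAddition array marks an added (merging) edge and false a removed (splitting) one:

```scala
// Illustrative sketch only; the real code lives in EditableMappingIOService.
// An edited edge (src, dst) carries a flag: true => the edge was added
// (agglomerates merged), false => the edge was removed (agglomerate split).
sealed trait EdgeUpdateAction
case class MergeAction(src: Long, dst: Long) extends EdgeUpdateAction
case class SplitAction(src: Long, dst: Long) extends EdgeUpdateAction

object EdgeActions {
  def fromArrays(edges: Seq[(Long, Long)], edgeIsAddition: Seq[Boolean]): Seq[EdgeUpdateAction] = {
    require(edges.length == edgeIsAddition.length, "edges and flags must have the same length")
    edges.zip(edgeIsAddition).map {
      case ((src, dst), true)  => MergeAction(src, dst) // edge added => merge
      case ((src, dst), false) => SplitAction(src, dst) // edge removed => split
    }
  }
}
```

Spelling out the flag meaning in a comment (as done in the PR) or in the name itself both resolve the ambiguity the reviewer ran into.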
...calableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala
| address = "localhost" | ||
| port = 6379 | ||
| } | ||
| cache.chunkCacheMaxSizeBytes = 2000000000 # 2 GB |
As I posted before, I think caching the read results from the uploaded editable mappings doesn't yield any benefit imo. In that case reducing the max cache size could leave more free RAM?! (Or not having a cache at all)
Actionable comments posted: 1
♻️ Duplicate comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2)
131-159: Iterator exhaustion/off-by-one fixed; confirm return value semantics. Materializing groups with .toSeq resolves the prior iterator exhaustion. Returning numberOfSavedVersions is fine; please confirm callers expect a count, not the last version number.
160-192: Delete the temporary unzip directory; avoid polluting the shared chunk cache. The directory from createDirectory() is never removed. Add guaranteed cleanup to prevent disk leaks. Also consider using a dedicated non-shared/throwaway chunk cache for this one-off bulk read to avoid evicting useful entries.
Apply:
 private def unzipAndReadZarr(editedEdgesZip: File)(implicit ec: ExecutionContext, tc: TokenContext): Fox[(MultiArray, MultiArray)] = {
   val unzippedDir = tempFileService.createDirectory()
-  for {
+  def cleanup(): Unit = try tempFileService.deleteDirectory(unzippedDir) catch { case _: Throwable => () }
+  (for {
     _ <- ZipIO
       .unzipToDirectory(editedEdgesZip, unzippedDir, includeHiddenFiles = true, List.empty, truncateCommonPrefix = false, excludeFromPrefix = None)
       .toFox
     unzippedVaultPath = new VaultPath(UPath.fromLocalPath(unzippedDir), FileSystemDataVault.create)
     editedEdgesZarrArray <- Zarr3Array.open(unzippedVaultPath / "edges/", DataSourceId("dummy", "unused"), "layer", None, None, None, chunkCacheService.sharedChunkContentsCache)
     edgeIsAdditionZarrArray <- Zarr3Array.open(unzippedVaultPath / "edgeIsAddition/", DataSourceId("dummy", "unused"), "layer", None, None, None, chunkCacheService.sharedChunkContentsCache)
     numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
     _ <- Fox.fromBool(numEdges.toInt.toLong == numEdges) ?~> "editableMappingFromZip.numEdges.exceedsInt"
     editedEdges <- editedEdgesZarrArray.readAsMultiArray(offset = Array(0L, 0L), shape = Array(numEdges.toInt, 2))
     edgeIsAddition <- edgeIsAdditionZarrArray.readAsMultiArray(offset = 0L, shape = numEdges.toInt)
-  } yield (editedEdges, edgeIsAddition)
+  } yield (editedEdges, edgeIsAddition)).andThen(_ => cleanup())
 }

If possible, replace sharedChunkContentsCache with a small/ephemeral cache for this import to avoid cache churn.
🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
194-226: Name clarity: isAddition → isMergingEdge (or similar). Rename isAddition to reflect the domain (merge vs split), e.g., isMergingEdge, for readability.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
app/models/annotation/WKRemoteTracingStoreClient.scala (1 hunks)
conf/application.conf (1 hunks)
conf/messages (1 hunks)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1 hunks)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- conf/application.conf
- conf/messages
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build-smoketest-push
- GitHub Check: backend-tests
- GitHub Check: frontend-tests
🔇 Additional comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (2)
5-5: LGTM: timing + KeyValueStoreImplicits integration. Using Instant for timing and mixing in KeyValueStoreImplicits looks correct and consistent with surrounding code.
Also applies to: 21-22, 34-36
193-211: Here’s the context around the parser settings in conf/application.conf:

$ sed -n '1,30p' conf/application.conf
play.http.parser {
  maxMemoryBuffer = 250M
  maxDiskBuffer = 1G
}
play.http.fileMimeTypes = ${play.http.fileMimeTypes}

The defaults here apply to all HTTP body parsers (including raw). Since you’re using the raw body parser via request.body.asRaw, these limits (250 MB in-memory, 1 GB on disk) already cover large ZIP uploads.

Review comment update: confirm existing parser limits suffice for ZIP uploads. Your global settings in conf/application.conf already configure Play’s raw-body parser to allow up to 250 MB in memory and spill to disk up to 1 GB. No further changes to parser configuration are required unless you anticipate uploads larger than these thresholds.
Actionable comments posted: 0
🧹 Nitpick comments (1)
app/controllers/AnnotationIOController.scala (1)
223-243: Consider clarifying the comment for better readability. The multi-annotation merge path correctly returns (List(...), 0L) as no editable mappings are present. The comment could be slightly more concise. Optional improvement:
- } else { // Multiple annotations with volume layers (but at most one each) were uploaded, they have no editable mappings. Merge those volume layers into one + } else { // Multiple annotations (each with at most one volume layer, none with editable mappings) were uploaded. Merge those volume layers into one.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
app/controllers/AnnotationIOController.scala (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/controllers/AnnotationIOController.scala (3)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (8)
map(281-284)map(377-377)Fox(30-230)Fox(232-305)successful(53-56)failure(58-62)serialCombined(95-99)serialCombined(99-111)app/models/annotation/WKRemoteTracingStoreClient.scala (2)
saveEditableMappingIfPresent(252-269)saveVolumeTracing(227-250)app/models/annotation/AnnotationUploadService.scala (2)
getEditableMappingEdgesZipFrom(32-35)getDataZipFrom(29-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
- GitHub Check: frontend-tests
🔇 Additional comments (6)
app/controllers/AnnotationIOController.scala (6)
166-169: LGTM! Helper function correctly detects duplicate fallback layers. The logic filters layers with fallback layers and compares the count to the distinct count, which accurately identifies duplicates.
177-178: LGTM! Correct return value for empty case. Returns (List(), 0L) matching the new signature, where 0L indicates no version history.
179-180: LGTM! Validation prevents data corruption from duplicate fallback layers. Correctly rejects annotations where multiple volume layers share the same fallback segmentation layer, which would lead to ambiguity.
187-222: LGTM! Version counter logic correctly addresses the past critical issue. The implementation properly tracks update action versions across layers:

- Initializes layerUpdatesStartVersionMutable to 1L (first version)
- Calls saveEditableMappingIfPresent with the current version as startVersion
- Increments the counter by numberOfSavedVersions (0 when no mapping present)
- Returns the final counter value as earliestAccessibleVersion (next available version, marking imported history as inaccessible)

The use of Fox.serialCombined ensures sequential processing with no race conditions. When numberOfSavedVersions is 0, the counter correctly stays unchanged rather than resetting.
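The counter behavior described above can be illustrated with a small, Fox-free sketch (assignStartVersions is a hypothetical helper for illustration, not code from this PR):

```scala
// Sketch: sequentially assign update-action start versions to layers.
// A layer without an editable mapping saves 0 versions and leaves the
// counter unchanged; the final counter value is the earliest version
// still accessible to the user (everything below is the imported history).
def assignStartVersions(savedVersionsPerLayer: Seq[Long]): (Seq[Long], Long) = {
  var nextVersion = 1L // first update-action version
  val startVersions = savedVersionsPerLayer.map { numberOfSavedVersions =>
    val start = nextVersion
    nextVersion += numberOfSavedVersions
    start
  }
  (startVersions, nextVersion)
}
```

For example, layers saving 3, 0, and 2 versions would start at versions 1, 4, and 4 respectively, with 6 as the earliest accessible version afterwards.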
135-153: LGTM! Proper usage of the new return signature. The calling code correctly unpacks (mergedVolumeLayers, earliestAccessibleVersion) and passes earliestAccessibleVersion to AnnotationProto to mark imported history as inaccessible, while using the pre-generated newAnnotationId for annotation creation.
183-185: LGTM! Critical validation prevents unsupported merge scenarios. Message key annotation.upload.editableMappingIncompleteInformation exists in conf/messages.
Actionable comments posted: 1
♻️ Duplicate comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
164-196: Temp directory leak: ensure unzip directory is deleted. The directory from createDirectory() isn’t deleted. Large uploads will accumulate on disk. Always clean up after opening/reading Zarr.
 private def unzipAndReadZarr(editedEdgesZip: File)(implicit ec: ExecutionContext, tc: TokenContext): Fox[(MultiArray, MultiArray)] = {
-  val unzippedDir = tempFileService.createDirectory()
-  for {
+  val unzippedDir = tempFileService.createDirectory()
+  val unzippedUPath = UPath.fromLocalPath(unzippedDir)
+  def cleanup(): Unit = try unzippedUPath.deleteRecursively() catch { case _: Throwable => () }
+  (for {
     _ <- ZipIO
       .unzipToDirectory(editedEdgesZip, unzippedDir, includeHiddenFiles = true, List.empty, truncateCommonPrefix = false, excludeFromPrefix = None)
       .toFox
-    unzippedVaultPath = new VaultPath(UPath.fromLocalPath(unzippedDir), FileSystemDataVault.create)
+    unzippedVaultPath = new VaultPath(unzippedUPath, FileSystemDataVault.create)
     editedEdgesZarrArray <- Zarr3Array.open(unzippedVaultPath / "edges/", DataSourceId("dummy", "unused"), "layer", None, None, None, chunkCacheService.sharedChunkContentsCache)
     edgeIsAdditionZarrArray <- Zarr3Array.open(unzippedVaultPath / "edgeIsAddition/", DataSourceId("dummy", "unused"), "layer", None, None, None, chunkCacheService.sharedChunkContentsCache)
-    numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
-    _ <- Fox.fromBool(numEdges.toInt.toLong == numEdges) ?~> "editableMappingFromZip.numEdges.exceedsInt"
-    editedEdges <- editedEdgesZarrArray.readAsMultiArray(offset = Array(0L, 0L), shape = Array(numEdges.toInt, 2))
-    edgeIsAddition <- edgeIsAdditionZarrArray.readAsMultiArray(offset = 0L, shape = numEdges.toInt)
-  } yield (editedEdges, edgeIsAddition)
+    numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+    _ <- Fox.fromBool(numEdges.toInt.toLong == numEdges) ?~> "editableMappingFromZip.numEdges.exceedsInt"
+    edgeFlagsCount <- edgeIsAdditionZarrArray.datasetShape.flatMap(_.headOption).toFox
+    _ <- Fox.fromBool(edgeFlagsCount == numEdges) ?~> "editableMappingFromZip.arrayLengthMismatch"
+    editedEdges <- editedEdgesZarrArray.readAsMultiArray(offset = Array(0L, 0L), shape = Array(numEdges.toInt, 2))
+    edgeIsAddition <- edgeIsAdditionZarrArray.readAsMultiArray(offset = 0L, shape = numEdges.toInt)
+  } yield (editedEdges, edgeIsAddition)).andThen { _ => cleanup() }
 }
🧹 Nitpick comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2)
142-146: Nit: fix comment typo and clarity. “splict” → “split”. Consider explicitly stating “edgeIsAddition == true => merge; false => split”.
177-190: Optional: avoid polluting shared chunk cache for one-shot reads. Opening Zarr arrays with sharedChunkContentsCache for a one-time full read can evict useful entries. If supported, pass a no-op/ephemeral cache or disable caching for this path.
Is there a lightweight cache option (e.g., a small LRU or NoCache) we can pass to Zarr3Array.open for single-use reads?
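As a rough illustration of what such an option could look like: a cache that always loads and never stores. ChunkContentsCache, MapBackedCache, and NoOpCache below are hypothetical names for this sketch; the real AlfuCache interface in webKnossos differs.

```scala
import scala.collection.mutable

// Hypothetical cache interface for illustration only.
trait ChunkContentsCache[K, V] {
  def getOrLoad(key: K, load: K => V): V
}

// A normal memoizing cache, for contrast: loads once, then serves from memory.
class MapBackedCache[K, V] extends ChunkContentsCache[K, V] {
  private val store = mutable.Map.empty[K, V]
  def getOrLoad(key: K, load: K => V): V = store.getOrElseUpdate(key, load(key))
}

// A pass-through cache: delegates every lookup to the loader, retains nothing,
// so one-shot bulk reads cannot evict entries from a shared cache.
class NoOpCache[K, V] extends ChunkContentsCache[K, V] {
  def getOrLoad(key: K, load: K => V): V = load(key)
}
```

Passing a NoOpCache-style instance to a single-use read would keep the interface compliant while leaving the shared cache untouched.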
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: frontend-tests
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
🔇 Additional comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2)
211-212: Verify mag=1 assumption. Using Vec3Int.ones presumes mag 1 exists. If the dataset lacks mag 1, downstream consumers may misinterpret positions. Please confirm this is acceptable for update actions, or derive the minimum available mag.
154-162: Clarify return value of initializeFromUploadedZip. The method currently yields the count of saved version batches. If clients expect the final version number (startVersion + count – 1) instead, update the implementation and API contract accordingly; otherwise keep returning the batch count. Confirm the intended behavior for initializeFromUploadedZip.
...calableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala
Testing went well besides
Test with multiple proofreading layers in one annotation
I was unable to create an annotation with multiple layers, each having an editable mapping active with at least one proofreading action done. I am only able to activate the mapping for the second layer, but the proofreading tool is not selectable. So not sure how to test this case, to be honest.
| None, | ||
| None, | ||
| None, | ||
| chunkCacheService.sharedChunkContentsCache) |
ok 👍, maybe add a comment to TSChunkCacheService.scala that this cache is pretty much unused in its current state but might be useful in the future? Just to persist that we had this discussion and decided to keep the cache, although it doesn't really have a benefit beyond letting the editable mapping upload read a DatasetArray through the expected interface?
You’re right, sorry, I removed this “step to test” now, I also found out that this is not a supported case anyway. |
Actionable comments posted: 0
🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
6-10: Consider tracking the future work more formally. The comment provides helpful context about the current limited usage and small cache size. However, the directive to "remove this comment and increase the size" when more usages are added could be tracked more formally (e.g., as a GitHub issue or TODO comment) to ensure it's not forgotten.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: backend-tests
- GitHub Check: build-smoketest-push
- GitHub Check: frontend-tests
🔇 Additional comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (2)
1-4: LGTM! Package declaration and imports are correct and minimal.
11-13: LGTM! The implementation is minimal and correct, properly using dependency injection to read the cache size from configuration.
As I tested last time I reviewed and nothing really changed, I think retesting doesn't make sense.
As everything worked last time 🟢
URL of deployed dev instance (used for testing):
Steps to test:
Test with multiple proofreading layers in one annotation ← unsupported case

TODOs:
Issues:
$PR_NUMBER.md file in unreleased_changes or use ./tools/create-changelog-entry.py)